83 result(s)
2023 Journal article Open Access
MoReLab: a software for user-assisted 3D reconstruction
Siddique A., Banterle F., Corsini M., Cignoni P., Sommerville D., Joffe C.
We present MoReLab, a tool for user-assisted 3D reconstruction. This reconstruction requires an understanding of the shapes of the desired objects. Our experiments demonstrate that existing Structure from Motion (SfM) software packages fail to estimate accurate 3D models in low-quality videos due to several issues such as low resolution, featureless surfaces, low lighting, etc. In such scenarios, which are common for industrial utility companies, user assistance becomes necessary to create reliable 3D models. In our system, the user first needs to add features and correspondences manually on multiple video frames. Then, classic camera calibration and bundle adjustment are applied. At this point, MoReLab provides several primitive shape tools such as rectangles, cylinders, curved cylinders, etc., to model different parts of the scene and export 3D meshes. These shapes are essential for modeling industrial equipment whose videos are typically captured by utility companies with old video cameras (low resolution, compression artifacts, etc.) and in disadvantageous lighting conditions (low lighting, torchlight attached to the video camera, etc.). We evaluate our tool on real industrial case scenarios and compare it against existing approaches. Visual comparisons and quantitative results show that MoReLab achieves superior results with respect to other user-interactive 3D modeling tools.
Source: Sensors (Basel) 23 (2023). doi:10.3390/s23146456
DOI: 10.3390/s23146456
Project(s): EVOCATION via OpenAIRE
See at: Sensors Open Access | ISTI Repository Open Access | www.mdpi.com Open Access | CNR ExploRA
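The geometric core of such a user-assisted pipeline, triangulating a 3D point from correspondences clicked in calibrated frames, can be sketched as follows. This is an illustrative NumPy sketch of classic linear (DLT) triangulation, not MoReLab's actual code; all names are ours.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : 2D image observations (pixel coordinates) of the same point.
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - P[0] @ X = 0, etc.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value (the null vector for noise-free data).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In a user-assisted setting the image observations come from manual clicks; bundle adjustment would then refine cameras and points jointly.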


2023 Journal article Open Access
Quantifying the loss of coral from a bleaching event using underwater photogrammetry and AI-Assisted Image Segmentation
Kopecky K. L., Pavoni G., Nocerino E., Brooks A. J., Corsini M., Menna F., Gallagher J. P., Capra A., Castagnetti C., Rossi P., Gruen A., Neyer F., Muntoni A., Ponchio F., Cignoni P., Troyer M., Holbrook S. J., Schmitt R. J.
Detecting the impacts of natural and anthropogenic disturbances that cause declines in organisms or changes in community composition has long been a focus of ecology. However, a tradeoff often exists between the spatial extent over which relevant data can be collected, and the resolution of those data. Recent advances in underwater photogrammetry, as well as computer vision and machine learning tools that employ artificial intelligence (AI), offer potential solutions with which to resolve this tradeoff. Here, we coupled a rigorous photogrammetric survey method with novel AI-assisted image segmentation software in order to quantify the impact of a coral bleaching event on a tropical reef, both at an ecologically meaningful spatial scale and with high spatial resolution. In addition to outlining our workflow, we highlight three key results: (1) dramatic changes in the three-dimensional surface areas of live and dead coral, as well as the ratio of live to dead colonies before and after bleaching; (2) a size-dependent pattern of mortality in bleached corals, where the largest corals were disproportionately affected; and (3) a significantly greater decline in the surface area of live coral, as revealed by our approximation of the 3D shape compared to the more standard planar area (2D) approach. The technique of photogrammetry allows us to turn 2D images into approximate 3D models in a flexible and efficient way. Increasing the resolution, accuracy, spatial extent, and efficiency with which we can quantify effects of disturbances will improve our ability to understand the ecological consequences that cascade from small to large scales, as well as allow more informed decisions to be made regarding the mitigation of undesired impacts.
Source: Remote sensing (Basel) 15 (2023). doi:10.3390/rs15164077
DOI: 10.3390/rs15164077
See at: ISTI Repository Open Access | www.mdpi.com Open Access | CNR ExploRA


2022 Journal article Open Access
On assisting and automatizing the semantic segmentation of masonry walls
Pavoni G., Giuliani F., De Falco A., Corsini M., Ponchio F., Callieri M., Cignoni P.
In Architectural Heritage, the masonry's interpretation is an essential instrument for analysing the construction phases, the assessment of structural properties, and the monitoring of its state of conservation. This work is generally carried out by specialists that, based on visual observation and their knowledge, manually annotate ortho-images of the masonry generated by photogrammetric surveys. This results in vector thematic maps segmented according to their construction technique (isolating areas of homogeneous materials/structure/texture or each individual constituting block of the masonry) or state of conservation, including degradation areas and damaged parts. This time-consuming manual work, often done with tools that have not been designed for this purpose, represents a bottleneck in the documentation and management workflow and is a severely limiting factor in monitoring large-scale monuments (e.g., city walls). This article explores the potential of AI-based solutions to improve the efficiency of masonry annotation in Architectural Heritage. This experimentation aims at providing interactive tools that support and empower the current workflow, benefiting from specialists' expertise.
Source: Journal on computing and cultural heritage (Online) 15 (2022). doi:10.1145/3477400
DOI: 10.1145/3477400
See at: ISTI Repository Open Access | dl.acm.org Restricted | Journal on Computing and Cultural Heritage Restricted | CNR ExploRA


2021 Journal article Open Access
Needs and gaps in optical underwater technologies and methods for the investigation of marine animal forest 3D-structural complexity
Rossi P., Ponti M., Righi S., Castagnetti C., Simonini R., Mancini F., Agrafiotis P., Bassani L., Bruno F., Cerrano C., Cignoni P., Corsini M., Drap P., Dubbini M., Garrabou J., Gori A., Gracias N., Ledoux J. B., Linares C., Mantas T. P., Menna F., Nocerino E., Palma M., Pavoni G., Ridolfi A., Rossi S., Skarlatos D., Treibitz T., Turicchia E., Yuval M., Capra A.
Marine animal forests are benthic communities dominated by sessile suspension feeders (such as sponges, corals, and bivalves) able to generate three-dimensional (3D) frameworks with high structural complexity. The biodiversity and functioning of marine animal forests are strictly related to their 3D complexity. The present paper aims at providing new perspectives in underwater optical surveys. Starting from the current gaps in data collection and analysis that critically limit the study and conservation of marine animal forests, we discuss the main technological and methodological needs for the investigation of their 3D structural complexity at different spatial and temporal scales. Despite recent technological advances, it seems that several issues in data acquisition and processing need to be solved, to properly map the different benthic habitats in which marine animal forests are present, their health status and to measure structural complexity. Proper precision and accuracy should be chosen and assured in relation to the biological and ecological processes investigated. Besides, standardized methods and protocols are strictly necessary to meet the FAIR (findability, accessibility, interoperability, and reusability) data principles for the stewardship of habitat mapping and biodiversity, biomass, and growth data.
Source: Frontiers in Marine Science 8 (2021). doi:10.3389/fmars.2021.591292
DOI: 10.3389/fmars.2021.591292
See at: Frontiers in Marine Science Open Access | Recolector de Ciencia Abierta, RECOLECTA Open Access | Archivio istituzionale della ricerca - Alma Mater Studiorum Università di Bologna Open Access | Flore (Florence Research Repository) Open Access | Diposit Digital de la Universitat de Barcelona Open Access | Ktisis Open Access | ISTI Repository Open Access | Frontiers in Marine Science Open Access | Frontiers in Marine Science Open Access | CNR ExploRA


2021 Journal article Open Access
CHARITY: Cloud for holography and cross reality
Dazzi P., Corsini M.
ISTI-CNR is involved in the H2020 CHARITY project (Cloud for HologrAphy and Cross RealITY), which started in January 2021. The project aims to leverage the benefits of intelligent, autonomous orchestration of a heterogeneous set of cloud, edge, and network resources, to create a symbiotic relationship between low and high latency infrastructures that will facilitate the needs of emerging applications.
Source: ERCIM news online edition 126 (2021): 46–47.

See at: ercim-news.ercim.eu Open Access | ISTI Repository Open Access | CNR ExploRA


2021 Journal article Open Access
Multimodal attention networks for low-level vision-and-language navigation
Landi F., Baraldi L., Cornia M., Corsini M., Cucchiara R.
Vision-and-Language Navigation (VLN) is a challenging task in which an agent needs to follow a language-specified path to reach a target destination. The goal gets even harder as the actions available to the agent get simpler and move towards low-level, atomic interactions with the environment. This setting takes the name of low-level VLN. In this paper, we strive for the creation of an agent able to tackle three key issues: multi-modality, long-term dependencies, and adaptability towards different locomotive settings. To that end, we devise "Perceive, Transform, and Act" (PTA): a fully-attentive VLN architecture that leaves the recurrent approach behind and is the first Transformer-like architecture incorporating three different modalities (natural language, images, and low-level actions) for agent control. In particular, we adopt an early fusion strategy to merge lingual and visual information efficiently in our encoder. We then propose to refine the decoding phase with a late fusion extension between the agent's history of actions and the perceptual modalities. We experimentally validate our model on two datasets: PTA achieves promising results in low-level VLN on R2R and achieves good performance on the recently proposed R4R benchmark.
Source: Computer vision and image understanding (Print) 210 (2021). doi:10.1016/j.cviu.2021.103255
DOI: 10.1016/j.cviu.2021.103255
See at: ISTI Repository Open Access | www.sciencedirect.com Restricted | CNR ExploRA


2021 Conference article Open Access
Evaluating deep learning methods for low resolution point cloud registration in outdoor scenarios
Siddique A., Corsini M., Ganovelli F., Cignoni P.
Point cloud registration is a fundamental task in 3D reconstruction and environment perception. We explore the performance of modern Deep Learning-based registration techniques, in particular Deep Global Registration (DGR) and Learning Multi-view Registration (LMVR), on outdoor real-world data consisting of thousands of range maps of a building acquired by a Velodyne LIDAR mounted on a drone. We used these pairwise registration methods in a sequential pipeline to obtain an initial rough registration, whose output can be further globally refined. This simple registration pipeline allows us to assess whether these modern methods are able to deal with this low-quality data. Our experiments demonstrated that, despite some design choices adopted to take into account the peculiarities of the data, more work is required to improve the results of the registration.
Source: STAG 2021 - Eurographics Italian Chapter Conference, pp. 187–191, Online Conference, 28-29/10/2021
DOI: 10.2312/stag.20211489
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE
See at: diglib.eg.org Open Access | ISTI Repository Open Access | CNR ExploRA
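The sequential pipeline described above, which chains pairwise rigid transforms into an initial rough global registration, amounts to composing 4x4 matrices. The following is a minimal NumPy sketch (not the authors' code); `pairwise_transforms` stands in for the output of a pairwise method such as DGR or LMVR.

```python
import numpy as np

def chain_poses(pairwise_transforms):
    """Compose pairwise 4x4 rigid transforms into global poses.

    pairwise_transforms[i] maps scan i+1 into the frame of scan i.
    Returns one global pose per scan, mapping it into scan 0's frame.
    """
    poses = [np.eye(4)]
    for T in pairwise_transforms:
        # Global pose of scan i+1 = global pose of scan i composed with T.
        poses.append(poses[-1] @ T)
    return poses
```

A global refinement step (e.g. pose-graph optimization) would then correct the drift that inevitably accumulates along such a chain.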


2021 Journal article Open Access
TagLab: AI-assisted annotation for the fast and accurate semantic segmentation of coral reef orthoimages
Pavoni G., Corsini M., Ponchio F., Muntoni A., Edwards C., Pedersen N., Sandin S., Cignoni P.
Semantic segmentation is a widespread image analysis task; in some applications, it requires such high accuracy that it still has to be done manually, taking a long time. Deep learning-based approaches can significantly reduce such times, but current automated solutions may produce results below expert standards. We propose TagLab, an interactive tool for the rapid labelling and analysis of orthoimages that speeds up semantic segmentation. TagLab follows a human-centered artificial intelligence approach that, by integrating multiple degrees of automation, empowers human capabilities. We evaluated TagLab's efficiency in annotation time and accuracy through a user study based on a highly challenging task: the semantic segmentation of coral communities in marine ecology. In the assisted labelling of corals, TagLab increased the annotation speed by approximately 90% for nonexpert annotators while preserving the labelling accuracy. Furthermore, human-machine interaction has improved the accuracy of fully automatic predictions by about 7% on average and by 14% when the model generalizes poorly. Based on the experience gained through the user study, TagLab has been improved, and preliminary investigations suggest a further significant reduction in annotation times.
Source: Journal of field robotics (2021). doi:10.1002/rob.22049
DOI: 10.1002/rob.22049
See at: onlinelibrary.wiley.com Open Access | ISTI Repository Open Access | CNR ExploRA


2021 Conference article Open Access
A deep learning method for frame selection in videos for structure from motion pipelines
Banterle F., Gong R., Corsini M., Ganovelli F., Van Gool L., Cignoni P.
Structure-from-Motion (SfM) using the frames of a video sequence can be a challenging task: there is a lot of redundant information, the computational time increases quadratically with the number of frames, and there may be low-quality images (e.g., blurred frames) that decrease the final quality of the reconstruction. To overcome these issues, we present a novel deep-learning architecture that speeds up SfM by selecting frames using a predicted sub-sampling frequency. This architecture is general and can learn/distill the knowledge of any algorithm for selecting frames from a video for generating high-quality reconstructions. One key advantage is that we can run our architecture in real time, saving computation while keeping high-quality results.
Source: ICIP 2021 - 28th IEEE International Conference on Image Processing, pp. 3667–3671, Anchorage, Alaska, USA, 19-22/09/2021
DOI: 10.1109/icip42928.2021.9506227
Project(s): ENCORE via OpenAIRE
See at: ISTI Repository Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
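As a non-learned point of comparison for the selection step that the paper learns, a classic baseline scores every frame for sharpness (e.g. the variance of the Laplacian) and keeps the best frames per window. This sketch is not the paper's architecture, only an illustration of the kind of policy it distills; the function and parameter names are ours.

```python
import numpy as np

def select_frames(sharpness, window=30, keep_per_window=3):
    """Keep the sharpest frames in each window of a video.

    sharpness : one quality score per frame (higher is sharper).
    Within every `window` consecutive frames, the `keep_per_window`
    highest-scoring frames are retained for the SfM pipeline; the
    rest (redundant or blurred frames) are discarded.
    Returns the sorted indices of the kept frames.
    """
    keep = []
    for start in range(0, len(sharpness), window):
        chunk = np.asarray(sharpness[start:start + window])
        best = np.argsort(chunk)[::-1][:keep_per_window]
        keep.extend(sorted(int(start + i) for i in best))
    return keep
```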


2021 Conference article Open Access
Cloud for holography and augmented reality
Makris A., Boudi A., Coppola M., Cordeiro L., Corsini M., Dazzi P., Andilla F. D., Gonzalez Rozas Y., Kamarianakis M., Pateraki M., Pham T. L., Protopsaltis A., Raman A., Romussi A., Rosa L., Spatafora E., Taleb T., Theodoropoulos T., Tserpes K., Zschau E., Herzog U.
The paper introduces CHARITY, a novel framework which aspires to leverage the benefits of intelligent, network continuum autonomous orchestration of cloud, edge, and network resources, to create a symbiotic relationship between low and high latency infrastructures. These infrastructures will facilitate the needs of emerging applications such as holographic events, virtual reality training, and mixed reality entertainment. The framework relies on different enablers and technologies related to cloud and edge for offering a suitable environment in order to deliver the promise of ubiquitous computing to the NextGen application clients. The paper discusses the main pillars that support the CHARITY vision and provides a description of the use cases planned to demonstrate CHARITY's capabilities.
Source: CloudNet 2021 - IEEE 10th International Conference on Cloud Networking, pp. 118–126, Online event, 8-10/11/2021
DOI: 10.1109/cloudnet53349.2021.9657125
Project(s): CHARITY via OpenAIRE
See at: ZENODO Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA


2021 Conference article Open Access
TagLab: A human-centric AI system for interactive semantic segmentation
Pavoni G., Corsini M., Ponchio F., Muntoni A., Cignoni P.
Fully automatic semantic segmentation of highly specific semantic classes and complex shapes may not meet the accuracy standards demanded by scientists. In such cases, human-centered AI solutions, able to assist operators while preserving human control over complex tasks, are a good trade-off to speed up image labeling while maintaining high accuracy levels. TagLab is an open-source AI-assisted software for annotating large orthoimages which takes advantage of different degrees of automation; it speeds up image annotation from scratch through assisted tools, creates custom fully automatic semantic segmentation models, and, finally, allows quick editing of automatic predictions. Since orthoimage analysis applies to several scientific disciplines, TagLab has been designed with a flexible labeling pipeline. We report our results in two different scenarios: marine ecology and architectural heritage.
Source: Human Centered AI Workshop at NeurIPS 2021 - Thirty-fifth Conference on Neural Information Processing Systems, Online event, 13/12/2021
DOI: 10.48550/arxiv.2112.12702
See at: arXiv.org e-Print Archive Open Access | ISTI Repository Open Access | doi.org Restricted | CNR ExploRA


2020 Journal article Open Access
A State of the Art Technology in Large Scale Underwater Monitoring
Pavoni G., Corsini M., Cignoni P.
In recent decades, benthic populations have been subjected to recurrent episodes of mass mortality. These events have been blamed in part on declining water quality and elevated water temperatures (see Figure 1) correlated to global climate change. Ecosystems are enhanced by the presence of species with three-dimensional growth. The study of the growth, resilience, and recovery capability of those species provides valuable information on the conservation status of entire habitats. We discuss here a state-of-the-art solution to speed up the monitoring of benthic populations through the automatic or assisted analysis of underwater visual data.
Source: ERCIM news 2020 (2020): 17–18.

See at: ercim-news.ercim.eu Open Access | ISTI Repository Open Access | CNR ExploRA


2020 Journal article Open Access
On improving the training of models for the semantic segmentation of benthic communities from orthographic imagery
Pavoni G., Corsini M., Callieri M., Fiameni G., Edwards C., Cignoni P.
The semantic segmentation of underwater imagery is an important step in the ecological analysis of coral habitats. To date, scientists produce fine-scale area annotations manually, an exceptionally time-consuming task that could be efficiently automatized by modern CNNs. This paper extends our previous work presented at the 3DUW'19 conference, outlining the workflow for the automated annotation of imagery from the first step of dataset preparation, to the last step of prediction reassembly. In particular, we propose an ecologically inspired strategy for an efficient dataset partition, an over-sampling methodology targeted on ortho-imagery, and a score fusion strategy. We also investigate the use of different loss functions in the optimization of a Deeplab V3+ model, to mitigate the class-imbalance problem and improve prediction accuracy on coral instance boundaries. The experimental results demonstrate the effectiveness of the ecologically inspired split in improving model performance, and quantify the advantages and limitations of the proposed over-sampling strategy. The extensive comparison of the loss functions gives numerous insights into the segmentation task; the Focal Tversky, typically used in the context of medical imaging (but not in remote sensing), turns out to be the most convenient choice. By improving the accuracy of automated ortho image processing, the results presented here promise to meet the fundamental challenge of increasing the spatial and temporal scale of coral reef research, allowing researchers greater predictive ability to better manage coral reef resilience in the context of a changing environment.
Source: Remote sensing (Basel) 12 (2020). doi:10.3390/RS12183106
DOI: 10.3390/rs12183106
See at: Remote Sensing Open Access | ISTI Repository Open Access | Remote Sensing Open Access | Remote Sensing Open Access | CNR ExploRA
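The Focal Tversky loss singled out above combines the Tversky index, a generalization of Dice with separate false-negative and false-positive weights, with a focal exponent that amplifies the loss of poorly segmented examples. Below is a minimal NumPy sketch for a binary mask; the parameter defaults follow common usage in the literature and are not necessarily the values tuned in this paper.

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for one binary segmentation mask.

    pred   : predicted foreground probabilities in [0, 1].
    target : ground-truth mask of 0s and 1s.
    alpha  : weight of false negatives; beta weights false positives.
             alpha > beta penalizes missed foreground more, which
             helps with class imbalance.
    gamma  : focal exponent; values below 1 emphasize hard examples.
    """
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    tp = np.sum(pred * target)
    fn = np.sum((1.0 - pred) * target)
    fp = np.sum(pred * (1.0 - target))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

A perfect prediction gives a loss of 0; a completely wrong one approaches 1.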


2020 Contribution to journal Embargo
Foreword to the special section on smart tool and applications for graphics (STAG 2019)
Agus M., Corsini M., Pintus R.
Source: Computers & graphics 91 (2020): A3–A4. doi:10.1016/j.cag.2020.05.027
DOI: 10.1016/j.cag.2020.05.027
See at: Computers & Graphics Restricted | www.sciencedirect.com Restricted | CNR ExploRA


2020 Conference article Open Access
Another Brick in the Wall: Improving the Assisted Semantic Segmentation of Masonry Walls
Pavoni G., Giuliani F., De Falco A., Corsini M., Ponchio F., Callieri M., Cignoni P.
In Architectural Heritage, the masonry's interpretation is an essential instrument for analyzing the construction phases, the assessment of structural properties, and the monitoring of its state of conservation. This work is generally carried out by specialists that, based on visual observation and their knowledge, manually annotate ortho-images of the masonry generated by photogrammetric surveys. This results in vectorial thematic maps segmented according to their construction technique (isolating areas of homogeneous materials/structure/texture) or state of conservation, including degradation areas and damaged parts. This time-consuming manual work, often done with tools that have not been designed for this purpose, represents a bottleneck in the documentation and management workflow and is a severely limiting factor in monitoring large-scale monuments (e.g., city walls). This paper explores the potential of AI-based solutions to improve the efficiency of masonry annotation in Architectural Heritage. This experimentation aims at providing interactive tools that support and empower the current workflow, benefiting from specialists' expertise.
Source: 18th Eurographics Workshop on Graphics and Cultural Heritage, pp. 43–51, Online event, 18-19/11/2020
DOI: 10.2312/gch.20201291
See at: ISTI Repository Open Access | CNR ExploRA


2020 Journal article Closed Access
Challenges in the deep learning-based semantic segmentation of benthic communities from Ortho-images
Pavoni G., Corsini M., Pedersen N., Petrovic V., Cignoni P.
Since the early days of low-cost camera development, the collection of visual data has become a common practice in the underwater monitoring field. Nevertheless, video and image sequences are a trustworthy source of knowledge that remains partially untapped. Human-based image analysis is a time-consuming task that creates a bottleneck between data collection and extrapolation. Nowadays, the annotation of biologically meaningful information from imagery can be efficiently automated or accelerated by convolutional neural networks (CNN). Presenting our case studies, we offer an overview of the potentialities and difficulties of accurate automatic recognition and segmentation of benthic species. This paper focuses on the application of deep learning techniques to multi-view stereo reconstruction by-products (registered images, point clouds, ortho-projections), considering the proliferation of these techniques among the marine science community. Of particular importance is the need to semantically segment imagery in order to generate demographic data vital to understand and explore the changes happening within marine communities.
Source: Applied geomatics (Print) (2020). doi:10.1007/s12518-020-00331-6
DOI: 10.1007/s12518-020-00331-6
See at: Applied Geomatics Restricted | link.springer.com Restricted | CNR ExploRA


2019 Journal article Open Access
Semantic segmentation of Benthic communities from ortho-mosaic maps
Pavoni G., Corsini M., Callieri M., Palma M., Scopigno R.
Visual sampling techniques represent a valuable resource for rapid, non-invasive data acquisition for underwater monitoring purposes. Long-term monitoring projects usually require the collection of large quantities of data, and the visual analysis by a human expert operator remains, in this context, a very time-consuming task. It has been estimated that only 1-2% of the acquired images are later analyzed by scientists (Beijbom et al., 2012). Strategies for the automatic recognition of benthic communities are required to effectively exploit all the information contained in visual data. Supervised learning methods, the most promising classification techniques in this field, are commonly affected by two recurring issues: the wide diversity of marine organisms, and the small amount of labeled data. In this work, we discuss the advantages offered by the use of annotated high resolution ortho-mosaics of the seabed to classify and segment the investigated specimens, and we suggest several strategies to obtain considerable per-pixel classification performance despite using a reduced training dataset composed of a single ortho-mosaic. The proposed methodology can be applied to a large number of different species, making the procedure of marine organism identification a highly adaptable task.
Source: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS Annals) 42 (2019): 151–158. doi:10.5194/isprs-archives-XLII-2-W10-151-2019
DOI: 10.5194/isprs-archives-xlii-2-w10-151-2019
Project(s): GreenBubbles via OpenAIRE
See at: ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Open Access | ISTI Repository Open Access | ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Open Access | ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Open Access | www.int-arch-photogramm-remote-sens-spatial-inf-sci.net Open Access | CNR ExploRA


2019 Conference article Closed Access
Image sets compression via patch redundancy
Corsini M., Banterle F., Ponchio F., Cignoni P.
In recent years, the development of compression algorithms for image collections (e.g., photo albums) has become very popular due to the enormous diffusion of digital photographs. Typically, current solutions create an image sequence from the images of the photo album to make them suitable for compression using a High Efficiency Video Coding (HEVC) encoder. In this study, we investigated a different approach to compressing a collection of similar images. Our main idea is to exploit the inter- and intra-patch redundancy to compress the entire set of images. In practice, our approach is equivalent to compressing the image set with Vector Quantization (VQ) using a global codebook. Our tests show that our clustering algorithm is effective for a large number of images.
Source: EUVIP 2019 - 8th European Workshop on Visual Information Processing, pp. 10–15, Roma, Italy, 28-31 October 2019
DOI: 10.1109/euvip47703.2019.8946237
See at: doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
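The global-codebook idea can be sketched in a few lines: cluster patches drawn from all images, then store each patch as the index of its nearest codeword. This is an illustrative NumPy sketch using plain k-means, not the paper's clustering algorithm; all names and the initialization scheme are ours.

```python
import numpy as np

def build_codebook(patches, k, iters=20):
    """Learn a global patch codebook with plain k-means.

    patches : (N, D) array of flattened patches drawn from the whole
              image set, so the codebook captures both inter- and
              intra-image patch redundancy.
    Returns (codebook, indices): k centroid patches and, for every
    input patch, the index of its nearest codeword. Storing only the
    indices plus the shared codebook is the compressed representation.
    """
    patches = np.asarray(patches, dtype=float)
    # Deterministic init: k patches evenly spaced over the input.
    codebook = patches[np.linspace(0, len(patches) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each patch to its nearest codeword (squared distance).
        d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        indices = d.argmin(axis=1)
        # Move each codeword to the mean of its assigned patches.
        for j in range(k):
            members = patches[indices == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, indices

def decompress(codebook, indices):
    """Rebuild every patch as its nearest codeword."""
    return codebook[indices]
```

Reconstruction is lossy: each patch is approximated by its codeword, so quality depends on the codebook size k relative to the patch diversity.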


2019 Journal article Closed Access
RELIGHT: a compact and accurate RTI representation for the web
Ponchio F., Corsini M., Scopigno R.
Relightable images have been widely used as a valuable tool for Cultural Heritage (CH) artifacts, including coins, bas-reliefs, paintings, and epigraphs. Reflectance Transformation Imaging (RTI), a commonly used type of relightable image, consists of a per-pixel function which encodes the reflection behavior, estimated from a set of digital photographs acquired from a fixed view. Web visualisation tools for RTI images currently require transmitting substantial quantities of data in order to achieve high-fidelity renderings. We propose a web-friendly compact representation for RTI images based on a joint interpolation-compression scheme that combines a PCA-based data reduction with a Gaussian Radial Basis Function (RBF) interpolation, exhibiting superior performance in terms of quality/size ratio. This approach can also be adapted to other data interpolation schemes, and it is not limited to Gaussian RBF. The rendering part is simple to implement and computationally efficient, allowing real-time rendering on low-end devices.
Source: Graphical models (Print) 105 (2019). doi:10.1016/j.gmod.2019.101040
DOI: 10.1016/j.gmod.2019.101040
See at: Graphical Models Restricted | www.sciencedirect.com Restricted | CNR ExploRA
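The per-pixel interpolation at the heart of such a representation can be sketched for a single pixel: solve a small linear system for Gaussian RBF weights so that relighting reduces to a short weighted sum. This is an illustrative sketch only; RELIGHT additionally applies a PCA-based reduction across pixels, which is omitted here, and the names are ours.

```python
import numpy as np

def fit_rbf(light_dirs, samples, sigma=0.5):
    """Fit Gaussian RBF weights to per-pixel reflectance samples.

    light_dirs : (N, 2) light directions (e.g. projected on the image
                 plane), one per input photograph.
    samples    : (N,) observed reflectance values of this pixel.
    Returns weights w such that relight() reproduces the samples
    exactly at the input directions (Gaussian kernel matrices on
    distinct points are positive definite, so the system is solvable).
    """
    d2 = ((light_dirs[:, None, :] - light_dirs[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.linalg.solve(Phi, samples)

def relight(light_dirs, w, new_dir, sigma=0.5):
    """Evaluate the interpolated reflectance for a new light direction."""
    d2 = ((light_dirs - new_dir) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ w
```

At render time only the weights (or, in RELIGHT, a few PCA coefficient planes) need to be transmitted, and each relit pixel costs one small dot product.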


2019 Conference article Open Access
A complete framework operating spatially-oriented RTI in a 3D/2D cultural heritage documentation and analysis tool
Pamart A., Ponchio F., Abergel V., Alaoui M'darhri A., Corsini M., Dellepiane M., Morlet F., Scopigno R., De Luca L.
Close-Range Photogrammetry (CRP) and Reflectance Transformation Imaging (RTI) are two of the most used image-based techniques for documenting and analyzing Cultural Heritage (CH) objects. Nevertheless, their potential impact in supporting the study and analysis of the conservation status of CH assets is reduced, as they remain mostly applied and analyzed separately. This is mostly because easy-to-use tools for the spatial registration of multimodal data and for their joint visualisation are still missing. The aim of this paper is to describe a complete framework for effective data fusion and to present a user-friendly viewer enabling the joint visual analysis of 2D/3D data and RTI images. This contribution is framed by the ongoing implementation of automatic multimodal registration (3D, 2D RGB and RTI) into a collaborative web platform (AIOLI), enabling the management of hybrid representations through an intuitive visualization framework and also supporting semantic enrichment through spatialized 2D/3D annotations.
Source: 8th International Workshop 3D-ARCH "3D Virtual Reconstruction and Visualization of Complex Architectures", pp. 573–580, Bergamo, Italy, 6-8 February 2019
DOI: 10.5194/isprs-archives-xlii-2-w9-573-2019
See at: ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Open Access | ISTI Repository Open Access | ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Open Access | Hyper Article en Ligne Restricted | CNR ExploRA